Definition (verbatim from ChatGPT)
Yes, there is a single loss function commonly referred to as “contrastive loss,” originally introduced by Hadsell et al. (2006) in the context of learning embeddings. This specific contrastive loss function is defined for pairs of inputs and encourages the embeddings of similar pairs to be close together while ensuring dissimilar pairs are at least a margin apart. The formula is:

$$
\mathcal{L}(y, x_1, x_2) = y \,\big\| f(x_1) - f(x_2) \big\|_2^2 + (1 - y)\,\max\!\big(0,\; m - \big\| f(x_1) - f(x_2) \big\|_2\big)^2
$$

Here:

- $y \in \{0, 1\}$ indicates whether the pair $(x_1, x_2)$ is similar ($y = 1$) or dissimilar ($y = 0$).
- $f$ is the learned embedding function.
- $m$ is a margin defining the minimum distance for dissimilar pairs.
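To make the formula concrete, here is a minimal PyTorch sketch of this pairwise loss; the function name, argument names, and the margin default are my own choices for illustration, not part of the quoted answer:

```python
import torch

def contrastive_loss(z1, z2, y, margin=1.0):
    """Pairwise contrastive loss in the style of Hadsell et al. (2006).

    z1, z2: embedding tensors of shape (batch, dim), i.e. f(x1) and f(x2)
    y:      tensor of shape (batch,), 1 for similar pairs, 0 for dissimilar
    margin: minimum distance m enforced between dissimilar pairs
    """
    # Euclidean distance between the two embeddings of each pair
    d = torch.norm(z1 - z2, p=2, dim=1)
    # Similar pairs (y = 1) are pulled together: penalty grows with d^2
    loss_similar = y * d.pow(2)
    # Dissimilar pairs (y = 0) are pushed apart until they are at least
    # `margin` away; beyond the margin they contribute no loss
    loss_dissimilar = (1 - y) * torch.clamp(margin - d, min=0).pow(2)
    return (loss_similar + loss_dissimilar).mean()
```

Note that once a dissimilar pair is separated by more than the margin, its gradient is zero, so the loss only spends capacity on pairs that still violate the constraint.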
While this specific “contrastive loss” is well-established, other loss functions like triplet loss, hinge loss, or InfoNCE are often considered “contrastive loss functions” in the broader sense, as they also aim to learn embeddings by contrasting similar and dissimilar pairs or samples.
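For comparison, a common formulation of InfoNCE uses in-batch negatives: each example's positive counterpart sits at the same row index, and every other example in the batch serves as a negative. A minimal sketch under that assumption (the names and the temperature default are illustrative, not from the quoted answer):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.1):
    """InfoNCE with in-batch negatives.

    queries, keys: (batch, dim) embedding tensors; the positive key for
    queries[i] is assumed to be keys[i], all other rows act as negatives.
    """
    q = F.normalize(queries, dim=1)
    k = F.normalize(keys, dim=1)
    # (batch, batch) matrix of cosine similarities, sharpened by temperature
    logits = q @ k.t() / temperature
    # The positive pair for row i is column i, so the targets are 0..batch-1
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)
```

The contrast here happens through the softmax denominator rather than an explicit margin: raising the score of the positive pair necessarily lowers the relative scores of the negatives.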